Maximum Entropy Discrimination
Authors
Tony Jebara, MIT Media Lab, 20 Ames St., Cambridge, MA 02139, jebara@media.mit.edu
Abstract
We present a general framework for discriminative estimation based on the maximum entropy principle and its extensions. All calculations involve distributions over structures and/or parameters rather than specific settings, and reduce to relative entropy projections. This holds even when the data is not separable within the chosen parametric class, in the context of anomaly detection rather than classification, or when the labels in the training set are uncertain or incomplete. Support vector machines are naturally subsumed under this class, and we provide several extensions. We are also able to estimate, exactly and efficiently, discriminative distributions over tree structures of class-conditional models within this framework. Preliminary experimental results are indicative of the potential of these techniques.
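The relative-entropy projection the abstract refers to can be written out as follows. This is a sketch in the spirit of the 1999 paper; the notation (discriminant function $\mathcal{L}$, margin variables $\gamma_t$, prior $P_0$) follows the paper, but the exact typesetting here is reconstructed:

```latex
% MED primal: project the prior P_0 onto the set of distributions
% that satisfy the classification (margin) constraints.
\begin{aligned}
\min_{P(\Theta,\gamma)} \quad & D\bigl(P(\Theta,\gamma) \,\|\, P_0(\Theta,\gamma)\bigr) \\
\text{s.t.} \quad & \int P(\Theta,\gamma)\,
  \bigl[\, y_t\,\mathcal{L}(X_t;\Theta) - \gamma_t \,\bigr]\,
  d\Theta\, d\gamma \;\ge\; 0, \qquad t = 1,\dots,T,
\end{aligned}
```

whose solution has the exponential-family form $P(\Theta,\gamma) \propto P_0(\Theta,\gamma)\exp\{\sum_t \lambda_t [\,y_t \mathcal{L}(X_t;\Theta) - \gamma_t\,]\}$ with Lagrange multipliers $\lambda_t \ge 0$, and predictions are made by averaging the discriminant under $P$: $\hat{y} = \operatorname{sign} \int P(\Theta)\,\mathcal{L}(X;\Theta)\, d\Theta$.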
Related references
Multi-View Maximum Entropy Discrimination
Maximum entropy discrimination (MED) is a general framework for discriminative estimation based on the well-known maximum entropy principle, which embodies the Bayesian integration of prior information with large-margin constraints on observations. It is a successful combination of maximum entropy learning and maximum margin learning, and can subsume support vector machines (SVMs) as a special ...
Stat 538 Project: Implementation of Maximum Entropy Discrimination with a Linear Discriminant Function
For this project, our goal was to implement Maximum Entropy Discrimination (MED) with a linear discriminant function for binary classification purposes. This technique was presented by Jaakkola, Meila, and Jebara in “Maximum Entropy Discrimination” (1999). Their paper provides a generalized framework for discrimination that relies on the maximum entropy principle. One of the interesting aspects...
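The reduction this project snippet alludes to can be sketched concretely: with a linear discriminant and a Gaussian prior on the weights, the MED dual takes an SVM-like quadratic form that can be maximized by projected gradient ascent. This is a minimal illustrative sketch, not the authors' implementation; the toy data, step size, and the omission of the bias term and the margin prior are all simplifying assumptions.

```python
import numpy as np

# MED with a linear discriminant L(X; theta) = theta . x and a Gaussian
# prior N(0, I) on theta yields an SVM-like dual (hard margins assumed):
#   J(lam) = sum_t lam_t - 0.5 * sum_{t,s} lam_t lam_s y_t y_s <x_t, x_s>
# maximized over lam_t >= 0 by projected gradient ascent.

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=+2.0, size=(20, 2)),   # positive class
               rng.normal(loc=-2.0, size=(20, 2))])  # negative class
y = np.array([1.0] * 20 + [-1.0] * 20)

K = (X @ X.T) * np.outer(y, y)   # Gram matrix with labels folded in
lam = np.zeros(len(y))
eta = 0.001                      # step size (small enough for stability)
for _ in range(5000):
    grad = 1.0 - K @ lam                      # gradient of J
    lam = np.maximum(0.0, lam + eta * grad)   # project onto lam >= 0

# Posterior mean of the weights under the MED solution distribution:
w = (lam * y) @ X
accuracy = np.mean(np.sign(X @ w) == y)
```

On this easily separable toy set the posterior-mean classifier recovers essentially the same decision rule as a hard-margin SVM, which is the "special case" relationship the surrounding abstracts describe.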
On Bayesian Inference, Maximum Entropy and Support Vector Machines Methods
The analysis of discrimination, feature, and model selection leads to a discussion of the relationships between the Support Vector Machine (SVM), Bayesian, and Maximum Entropy (MaxEnt) formalisms. MaxEnt discrimination can be seen as a particular case of Bayesian inference, which in turn can be seen as a regularization approach applicable to SVM. Probability measures can be attached to each f...
Maximum Entropy Discrimination Markov Networks
Standard maximum-margin structured prediction methods lack a straightforward probabilistic interpretation of the learning scheme and the prediction rule. Therefore, their unique advantages, such as dual sparseness and kernel tricks, cannot be easily conjoined with the merits of a probabilistic model, such as Bayesian regularization, model averaging, and the ability to model hidden variables. In this pape...
Soft Margin Consistency Based Scalable Multi-View Maximum Entropy Discrimination
Multi-view learning has received increasing interest in recent years as a way to analyze complex data. Lately, multi-view maximum entropy discrimination (MVMED) and alternative MVMED (AMVMED) were proposed as extensions of maximum entropy discrimination (MED) to the multi-view learning setting; both use the hard-margin consistency principle, which enforces the two view margins to be the same. In this paper, we pr...
Reconstruction of Probability Density Functions from Channel Representations
The channel representation allows the construction of soft histograms, where peaks can be detected with much higher accuracy than in regular hard-binned histograms. This is critical in, e.g., reducing the number of bins of generalized Hough transform methods. When applying the maximum entropy method to the channel representation, a minimum-information reconstruction of the underlying continuous...
Publication year: 1999